To address the user cluster partitioning problem in deploying Unmanned Aerial Vehicle (UAV) base stations for auxiliary communication in emergency scenarios, a feature-weighted fuzzy clustering algorithm named Improved FCM was proposed, considering both UAV base station performance and user experience. Firstly, to tackle the high computational complexity and convergence difficulty of partitioning user clusters under random distribution conditions, a feature-weighted node data projection algorithm based on distance weighting was introduced according to the performance constraints of each UAV base station's signal coverage range and maximum number of served users. Secondly, to ensure effective user partitioning when the same user falls within the effective ranges of multiple clusters, and to maximize UAV base station resource utilization, a value-weighted algorithm based on user location and UAV base station load balancing was proposed. Experimental results demonstrate that the proposed methods meet the service performance constraints of UAV base stations. Additionally, the deployment scheme based on the proposed methods effectively improves the average load rate and coverage ratio of the system, reaching 0.774 and 0.0263 respectively, which are higher than those of GFA (Geometric Fractal Analysis), Sp-C (Spectral Clustering), etc.
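The fuzzy c-means core of such a partitioning step can be sketched as follows. This is a generic feature-weighted FCM iteration under Euclidean distance, with the weight vector `w` standing in for the paper's distance-weighting scheme; it is not the exact Improved FCM algorithm, which additionally enforces coverage-range and load constraints.

```python
import numpy as np

def weighted_fcm(X, k, w, m=2.0, iters=100, eps=1e-6, seed=0):
    """Minimal feature-weighted fuzzy c-means sketch.

    X: (n, d) user coordinates; k: number of clusters;
    w: (d,) per-feature weights (an illustrative weighting, not the
    paper's exact scheme). Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((k, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        C = (Um @ X) / Um.sum(axis=1, keepdims=True)   # fuzzy-weighted centers
        # feature-weighted squared distances from each center to each point
        D = ((X[None, :, :] - C[:, None, :]) ** 2 * w).sum(axis=2)
        D = np.maximum(D, 1e-12)            # avoid division by zero
        # standard FCM membership update
        U_new = 1.0 / (D ** (1 / (m - 1)) * (1.0 / D ** (1 / (m - 1))).sum(axis=0))
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return C, U
```

The fuzzy memberships, rather than hard labels, are what make it natural to later arbitrate users that fall within the effective ranges of multiple clusters.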
To address the insufficient design of the edge weight window threshold in Text Graph Convolutional Network (Text GCN), mine word association structures more accurately, and improve prediction accuracy, a fake review detection algorithm combining Gaussian Mixture Model (GMM) and Text GCN, named F-Text GCN, was proposed. The edge signal strength of fake reviews, which is relatively weak compared to that of normal reviews due to their smaller share of the training data, was improved by using the GMM to separate noise edge weight distributions. Additionally, considering the diversity of information sources, the adjacency matrix was constructed by combining documents, words, reviews and non-text features. Finally, the fake review association structure of the adjacency matrix was extracted through spectral decomposition of Text GCN. Validation experiments were performed on 126 086 actual Chinese reviews collected from a large domestic e-commerce platform. Experimental results show that, for detecting fake reviews, the F1 score of F-Text GCN is 82.92%, outperforming BERT (Bidirectional Encoder Representations from Transformers) and Text CNN by 10.46% and 11.60% respectively, and 2.94% higher than that of Text GCN. For highly imitated fake reviews, which are challenging to detect, F-Text GCN achieves an overall prediction accuracy of 94.71% by performing secondary detection on the samples that Support Vector Machine (SVM) found difficult to detect, which is 2.91% and 14.54% higher than those of Text GCN and SVM respectively. Based on the study findings, lexical interference with consumer decision-making is evident in the second-order graph neighbor structure of fake reviews. This result indicates that the proposed algorithm is especially suitable for extracting long-range word collocation structures and global sentence feature pattern variations for fake review detection.
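The GMM separation step can be sketched as follows, assuming a simple two-component mixture over scalar edge weights with the higher-mean component treated as signal; this is a simplification of the paper's procedure, not its exact noise model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_noise_edges(weights, seed=0):
    """Fit a 2-component GMM to edge weights and keep edges assigned to
    the higher-mean (signal) component. A sketch of the separation idea,
    not the paper's exact edge-weight threshold design."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=seed).fit(w)
    signal = int(np.argmax(gmm.means_.ravel()))  # higher mean = signal component
    return gmm.predict(w) == signal              # boolean mask of edges to keep
```

The returned mask would then be used to zero out noise entries before building the adjacency matrix fed to the GCN.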
Aiming at the high computational complexity and large memory consumption of existing super-resolution reconstruction networks, a lightweight image super-resolution reconstruction network based on Transformer-CNN was proposed, making super-resolution reconstruction more suitable for embedded terminals such as mobile platforms. Firstly, a hybrid block based on Transformer-CNN was proposed, which enhanced the ability of the network to capture local-global depth features. Then, a modified inverted residual block, with special attention to the characteristics of the high-frequency region, was designed to improve feature extraction ability and reduce inference time. Finally, after exploring the best options for the activation function, the GELU (Gaussian Error Linear Unit) activation function was adopted to further improve network performance. Experimental results show that the proposed network achieves a good balance between image super-resolution performance and network complexity, and reaches an inference speed of 91 frames per second on the benchmark dataset Urban100 with a scale factor of 4, which is 11 times faster than the excellent network SwinIR (Image Restoration using Swin Transformer), indicating that the proposed network can efficiently reconstruct the textures and details of an image while significantly reducing inference time.
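The GELU activation adopted above has a simple closed form, x·Φ(x) with Φ the standard normal CDF, along with a widely used tanh approximation; both are sketched here for reference.

```python
import math

def gelu(x):
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    """Common tanh approximation of GELU (Hendrycks & Gimpel form)."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))
```

Unlike ReLU, GELU is smooth and non-monotonic near zero, which is one reason it is often preferred in Transformer-style blocks.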
Aiming at the problems of inconsistent text style before and after editing and insufficient readability of the generated new text in text image editing tasks, a text image editing method guided by font and character attributes was proposed. Firstly, the generation direction of the text foreground style was guided by a font attribute classifier combined with font classification, perception and texture losses, to improve the consistency of text style before and after editing. Secondly, the accurate generation of text glyphs was guided by a character attribute classifier combined with a character classification loss, to reduce text artifacts and generation errors and improve the readability of the generated new text. Finally, an end-to-end fine-tuning training strategy was used to refine the generated results of the entire staged editing model. In comparison experiments with SRNet (Style Retention Network) and SwapText, the proposed method achieves a PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity) of 25.48 dB and 0.842, which are 2.57 dB and 0.055 higher than those of SRNet and 2.11 dB and 0.046 higher than those of SwapText respectively; its Mean Square Error (MSE) is 0.0043, which is 0.0031 and 0.024 lower than those of SRNet and SwapText respectively. Experimental results show that the proposed method can effectively improve the generation effect of text image editing.
To improve universal No-Reference Image Quality Assessment (NR-IQA), a new NR-IQA algorithm based on the saliency deep features of a pseudo reference image was proposed. Firstly, the pseudo reference image corresponding to the distorted image, generated by the ConSinGAN model, was used as compensation information for the distorted image, thereby making up for a key weakness of NR-IQA methods: the lack of real reference information. Secondly, the saliency information of the pseudo reference image was extracted, and the pseudo saliency map and the distorted image were input into the VGG16 network to extract deep features. Finally, the obtained deep features were merged and mapped into a regression network composed of fully connected layers to obtain quality predictions consistent with human vision. Experiments were conducted on four large public image datasets, TID2013, TID2008, CSIQ and LIVE, to prove the effectiveness of the proposed algorithm. The results show that the Spearman Rank-Order Correlation Coefficient (SROCC) of the proposed algorithm on the TID2013 dataset is 5 percentage points higher than that of the H-IQA (Hallucinated-IQA) algorithm and 14 percentage points higher than that of the RankIQA (learning from Rankings for no-reference IQA) algorithm. The proposed algorithm also performs stably on single distortion types. Experimental results indicate that the proposed algorithm is superior to the existing mainstream Full-Reference Image Quality Assessment (FR-IQA) and NR-IQA algorithms, and is consistent with human subjective perception.
Mobile Edge Computing (MEC) can reduce the energy consumption of mobile devices and the delay of users’ access to services by deploying resources in users’ neighborhood; however, most relevant caching studies ignore the regional differences of the services requested by users. A cache cooperation strategy for maximizing revenue was proposed by considering the features of requested content in different regions and the dynamic characteristics of content. Firstly, considering the regional features of user preferences, the base stations were partitioned into several collaborative domains, and the base stations in each collaborative domain were able to serve users with the same preferences. Then, the content popularity in each region was predicted by the Auto-Regressive Integrated Moving Average (ARIMA) model and the similarity of the content. Finally, the cache cooperation problem was transformed into a revenue maximization problem, and a greedy algorithm was used to solve the content placement and replacement problems according to the revenue obtained by content storage. Simulation results show that, compared with the Grouping-based and Hierarchical Collaborative Caching (GHCC) algorithm based on MEC, the proposed algorithm improves the cache hit rate by 28% with lower average transmission delay. It can be seen that the proposed algorithm can effectively improve the cache hit rate and reduce the average transmission delay at the same time.
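The greedy placement step can be sketched as follows. The `(content_id, size, expected_revenue)` representation is an assumption for illustration, with expected revenue standing in for whatever the predicted popularity implies; it shows the greedy selection principle, not the paper's full cooperation and replacement scheme.

```python
def greedy_placement(contents, capacity):
    """Greedy content placement for a single cache.

    contents: list of (content_id, size, expected_revenue) tuples
    (hypothetical representation); capacity: total cache size.
    Fills the cache in descending revenue-per-size order and returns
    the ids of placed contents."""
    placed, used = [], 0
    # highest revenue density first: classic greedy knapsack heuristic
    for cid, size, rev in sorted(contents, key=lambda c: c[2] / c[1],
                                 reverse=True):
        if used + size <= capacity:
            placed.append(cid)
            used += size
    return placed
```

Replacement works the same way: when a new content's revenue density exceeds that of cached items, the lowest-density items are evicted first.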
Aiming at the problems that Chinese medicinal materials come in a wide variety with small sample sizes, and that their vessels are difficult to classify, an improved convolutional neural network method based on multi-channel color space and an attention mechanism model was proposed. Firstly, the multi-channel color space was used to merge the RGB color space with other color spaces into 6 channels as the network input, so that the network was able to learn characteristic information such as brightness, hue and saturation, compensating for the insufficient samples. Secondly, an attention mechanism model was added to the network, in which the two pooling layers were tightly connected by the channel attention model and multi-scale dilated convolutions were combined by the spatial attention model, so that the network focused on the key feature information in the small samples. Experimental results on 8 774 vessel images of 34 Chinese medicinal material samples show that, compared with the original ResNet network, using the multi-channel color space and the attention mechanism model increases the accuracy by 1.8 and 3.1 percentage points respectively, and combining the two methods increases the accuracy by 4.1 percentage points. It can be seen that the proposed method greatly improves the accuracy of small-sample classification.
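One plausible reading of the 6-channel input is stacking RGB with its HSV conversion, since the abstract mentions brightness, hue and saturation but does not name the second color space; a minimal sketch under that assumption:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def six_channel_input(rgb):
    """Stack an RGB image (H, W, 3, values in [0, 1]) with its HSV
    conversion into a (H, W, 6) tensor. HSV is an assumption here;
    the paper only says 'other color spaces'."""
    return np.concatenate([rgb, rgb_to_hsv(rgb)], axis=-1)
```

The resulting array would be transposed to channel-first layout and fed to the network's first convolution, whose input channel count is raised from 3 to 6.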
In order to solve the multi-source multi-sink multicast network coding problem, an algorithm for computing the achievable information rate region and an approach for constructing linear network coding schemes were proposed. Based on previous studies, the multi-source multi-sink multicast network coding problem was transformed into a specific single-source multicast network coding scenario with a constraint at the source node. Through theoretical analysis and formula derivation, the constraint relationship among the multicast rates of the source nodes was derived. Then a multi-objective optimization model was constructed to describe the boundary of the achievable information rate region. Two methods were presented for solving this model: one was an enumeration method, the other a multi-objective optimization method based on a genetic algorithm. The achievable information rate region could be derived from the Pareto boundary of the multi-objective optimization model. After assigning the multicast rates of the source nodes, the linear network coding scheme could be constructed by solving the constrained single-source multicast network coding scenario. The simulation results show that the proposed methods can find the boundary of the achievable information rate region, including integral points, and construct linear network coding schemes.
Based on single-source multicast network coding, in order to explore the relationship between the multicast rate and the minimal number of needed coding nodes, theoretical analysis and formula derivation of the relationship were carried out by employing the generation and extension techniques of linear network coding. It is concluded that the minimal number of needed coding nodes increases monotonically with the multicast rate. A multi-objective optimization model was constructed to accurately describe the quantitative relationship between them. To solve this model, a search strategy was derived to search all feasible coding schemes. By combining the search strategy with NSGA-II, an algorithm for solving this model was presented. When the tradeoff between the two must be considered, the solution of the model serves as the basis for choosing a network coding scheme. The proposed algorithm can not only search the whole Pareto set, but also search, with less search cost, a partial Pareto set related to a feasible multicast rate region given by the user. The simulation results verify the conclusion of the theoretical analysis, and indicate that the proposed algorithm is feasible and efficient.
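The Pareto-filtering core that NSGA-II builds on can be sketched as a non-dominated sort over candidate objective vectors; this shows only the selection principle (here for maximization of all objectives), not the full NSGA-II algorithm with crowding distance and genetic operators.

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors, assuming
    all objectives are to be maximized. A point p is dominated if some
    other point q is at least as good in every objective and strictly
    better in at least one (i.e. q >= p componentwise and q != p)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

In the paper's setting one objective (multicast rate) is maximized while the other (number of coding nodes) is minimized; negating the minimized objective reduces that case to the all-maximization form above.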
Research on the dissemination effect of micro-blog messages plays an important role in improving marketing, strengthening public opinion monitoring and discovering hotspots accurately. Focusing on differences between individuals, which were not considered previously, a method for predicting the scale and depth of retweeting based on behavior analysis was proposed. A predictive model of retweet behavior was built with the Logistic Regression (LR) algorithm, using nine relevant features extracted from users, relationships and content. Based on this model, the proposed prediction method considered the characteristic of information disseminating along users and performed iterative statistical analysis of adjacent users step by step. The experimental results on a Sina micro-blog dataset show that the accuracy of scale and depth prediction reaches approximately 87.1% and 81.6% respectively, indicating that the method can predict the dissemination effect well.
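The LR scoring step can be sketched as follows. The feature names are hypothetical placeholders, since the abstract says only that nine features were drawn from users, relationships and content without listing them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins for the paper's nine user, relationship and
# content features (the abstract does not enumerate them).
FEATURES = ["follower_cnt", "followee_cnt", "is_verified", "mutual_follow",
            "interact_cnt", "has_url", "has_hashtag", "text_len", "sentiment"]

def train_retweet_model(X, y):
    """Fit an LR model mapping the nine features to retweet/no-retweet.
    A sketch of the modeling step, not the paper's trained model."""
    return LogisticRegression(max_iter=1000).fit(X, y)

def retweet_prob(model, x):
    """Probability that the user described by feature vector x retweets."""
    return model.predict_proba(np.asarray(x, dtype=float).reshape(1, -1))[0, 1]
```

Scale and depth prediction then follow by applying `retweet_prob` to each follower of the current retweeters and iterating level by level along the follower graph.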
To fill the gap in digital watermarking technology for 2D vector animation, a blind watermarking scheme was proposed that makes full use of the vector and timing characteristics. The scheme adopted the color values of changed elements in adjacent frames of the vector animation as the embedding target, and used the Least Significant Bit (LSB) algorithm as the embedding/extraction algorithm to embed multiple groups of watermarks into the vector animation. Finally, the accurate watermark could be obtained by verifying the extracted multiple groups of watermarks. Theoretical analysis and experimental results show that the scheme is not only easy to implement and robust, but can also realize tamper-proofing. Moreover, the vector animation can be played in real time during watermark embedding and extraction.
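The LSB step on 8-bit color components can be sketched as follows; this shows only the generic bit embedding and extraction, not the scheme's selection of changed elements across frames or its multi-group verification protocol.

```python
def embed_bits(colors, bits):
    """Embed watermark bits into the least significant bit of each
    8-bit color component: clear the LSB, then OR in the payload bit."""
    return [(c & ~1) | b for c, b in zip(colors, bits)]

def extract_bits(colors, n):
    """Recover the first n embedded bits by reading each component's LSB."""
    return [c & 1 for c in colors[:n]]
```

Because only the lowest bit of each component changes, the color shift is at most 1/255 per channel, which is what keeps the embedding visually imperceptible.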
Fault diagnosis design based on qualitative constraints
Based on the QSIM algorithm of B. J. Kuipers, a comparative constraint was presented to reduce the diagnosis space. The transfer regulation of qualitative constraints was used in the simulation and reasoning of system diagnosis. Taking an observed faulty state as the starting point, the algorithm diagnosed the location and cause of variable discrepancies according to the transfer regulation of qualitative constraints, reasoned forward from the diagnosed fault to examine the result, and deleted redundancy from the diagnosis results. In the example of a condensation refrigeration system, the constraint relations were built according to qualitative difference equations. For the fault of poor refrigeration efficiency, excessive mixing of air or Freon was diagnosed as the fault source, which is consistent with the actual system.